
The Contingency Table and Metrics

|            | True +                     | True -             |                 |
|------------|----------------------------|--------------------|-----------------|
| Pred +     | TP                         | FP (Type I Error)  | PPV / Precision |
| Pred -     | FN (Type II Error)         | TN                 | NPV             |
|            | Sensitivity / TPR / Recall | Specificity / TNR  | Accuracy        |
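A minimal sketch of how the four cells get tallied from binary labels (pure Python; the toy labels below are made up):

```python
# Tally the four contingency-table cells from binary labels (1 = positive).
def contingency_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # Type I error
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # Type II error
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

y_true = [1, 1, 0, 0, 1, 0, 0, 1]   # toy truth labels
y_pred = [1, 0, 0, 1, 1, 0, 0, 1]   # toy predictions
print(contingency_counts(y_true, y_pred))  # (3, 1, 1, 3)
```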

Metrics - Pred given Truth

We’re doing $P(\hat{y} \mid y)$ with these.

Accuracy

How well overall did you do with the True labels?

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

Many caveats, the most important being imbalanced classes. If your labels are 90% “Doesn’t have Rare Disease” you’ll have a trained model that’s really good at saying you don’t have the disease. You’ll have super high accuracy. But is that a good metric?
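A quick sketch of that caveat with hypothetical 90/10 labels: a “model” that always predicts healthy still scores 90% accuracy.

```python
# Hypothetical imbalanced labels: 90 healthy (0), 10 with the rare disease (1).
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100           # a "model" that always says "no disease"

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)  # 0.9 -- looks great, yet it catches zero actual cases
```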

Sensitivity / TPR / Recall

How many of the True Positives did you get right?

$$\text{Sensitivity} = \frac{TP}{TP + FN}$$

Specificity / TNR

How many of the True Negatives did you get right?

$$\text{Specificity} = \frac{TN}{TN + FP}$$
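Both are just per-column ratios; a tiny sketch from the four counts (function names are mine):

```python
# Sensitivity and Specificity from the four contingency-table counts.
def sensitivity(tp, fn):
    return tp / (tp + fn)   # share of the actual positives you caught

def specificity(tn, fp):
    return tn / (tn + fp)   # share of the actual negatives you caught

print(sensitivity(tp=3, fn=1))   # 0.75
print(specificity(tn=3, fp=1))   # 0.75
```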

F1 Score

This is the Harmonic Mean of Precision and Recall.

$$\begin{align*} F1 = \frac{2}{\frac{1}{\text{Precision}} + \frac{1}{\text{Recall}}} &= 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \\ &= \frac{2TP}{2TP + FP + FN} \\ &= \frac{TP}{TP + \frac{1}{2}(FP + FN)} \end{align*}$$

So the F1 Score suffers when you rack up a lot of FPs and FNs (more precisely, when the arithmetic mean of the two is high; see the last form above). This is a good thing!

Now you can have a Macro F1, which calculates the score per class and averages them, treating all classes equally. Or you can do a Weighted F1, which accounts for class imbalance by weighting each class’s score by its number of instances. The latter may be better than Accuracy in some cases.
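A sketch of the macro vs. weighted distinction using scikit-learn’s f1_score (assuming scikit-learn is available; the toy labels are mine):

```python
from sklearn.metrics import f1_score

# Toy 3-class labels with imbalance (class 0 dominates).
y_true = [0, 0, 0, 0, 0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 0, 0, 0, 1, 1, 0, 2, 1]

print(f1_score(y_true, y_pred, average="macro"))     # every class counts equally
print(f1_score(y_true, y_pred, average="weighted"))  # classes weighted by their support
```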

AUROC and ROC

ROC plots the TPR (Sensitivity) versus FPR (1 - Specificity) for each “operating point” or threshold. The area under the curve is AUROC and is $\in [0, 1]$. It hides information on class imbalance effects, calibration, and prevalence effects.

What does this tell you? It’s all about ranking. What is the probability that, if I picked a (Sick Patient, Healthy Patient) tuple, my model would rank the sick patient higher?

Calibration

Small example. Imagine you have 10 patients. 4 actually have the condition (call them sick), 6 don’t (healthy). Your model gives each a risk score between 0 and 1. Here are the scores, sorted highest to lowest, with their true status:

| Patient | Risk Score | Truly sick? |
|---------|------------|-------------|
| A       | 0.95       | ✅ sick     |
| B       | 0.88       | ✅ sick     |
| C       | 0.72       | ✅ sick     |
| D       | 0.65       | ❌ healthy  |
| E       | 0.51       | ✅ sick     |
| F       | 0.40       | ❌ healthy  |
| G       | 0.33       | ❌ healthy  |
| H       | 0.21       | ❌ healthy  |
| I       | 0.15       | ❌ healthy  |
| J       | 0.08       | ❌ healthy  |

Now you look at all ($4 \text{ sick} \times 6 \text{ healthy} = 24$) tuples. Looking at the table above, 23 of those rankings are correct; the only miss is the (E, D) pair, where healthy D (0.65) outranks sick E (0.51). Your AUROC is a solid $\frac{23}{24} \approx 0.96$. Hooray! Not so fast. If you divide each Risk Score by 10, you will still get the same AUROC. That’s the catch: AUROC says nothing about calibration!
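Here’s a sketch that recomputes that AUROC by brute-force pair counting, and shows that rescaling the scores changes nothing:

```python
# AUROC as pair ranking: P(score(sick) > score(healthy)) over all (sick, healthy) pairs.
sick    = [0.95, 0.88, 0.72, 0.51]                 # A, B, C, E
healthy = [0.65, 0.40, 0.33, 0.21, 0.15, 0.08]     # D, F, G, H, I, J

def auroc(pos_scores, neg_scores):
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(auroc(sick, healthy))                                      # 23/24 ~ 0.958
print(auroc([s / 10 for s in sick], [h / 10 for h in healthy]))  # same 0.958
```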

Using this to Pick Thresholds

You can pick a threshold¹ straight off the curve, but that’s generally not a great idea in healthcare. You have to account for the cost of misses, the cost of false alarms, available resources, and so on. If only it were that simple.
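For what it’s worth, here is a sketch of the “simple” version from the footnote: maximize Sensitivity + Specificity - 1 (Youden’s J) across the ROC operating points, using scikit-learn’s roc_curve (assumed available) on the toy scores above.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Toy scores from the table above; 1 = sick, 0 = healthy.
y_true  = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])
y_score = np.array([0.95, 0.88, 0.72, 0.65, 0.51, 0.40, 0.33, 0.21, 0.15, 0.08])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                          # Youden's J = Sensitivity + Specificity - 1
best = thresholds[np.argmax(j)]
print(best)  # the threshold that maximizes J (ignores costs, resources, prevalence)
```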


Metrics - Truth given Pred

We’re flipping and doing $P(y \mid \hat{y})$ with these.

Positive Predictive Value / Precision

Depends on prevalence (the proportion of positive cases).

$$\text{PPV} = \frac{TP}{TP + FP}$$
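To see the prevalence dependence, here’s a sketch via Bayes’ rule: the same hypothetical sensitivity and specificity give very different PPVs at 1% vs. 20% prevalence.

```python
# PPV from sensitivity, specificity, and prevalence (Bayes' rule).
def ppv(sens, spec, prev):
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

# Hypothetical test: 90% sensitive, 95% specific.
print(ppv(0.90, 0.95, prev=0.01))  # ~0.15 -- rare disease, most positives are false
print(ppv(0.90, 0.95, prev=0.20))  # ~0.82 -- common disease, a positive means much more
```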

Negative Predictive Value

Doesn’t come up much really.

$$\text{NPV} = \frac{TN}{TN + FN}$$

Other Metrics

Risk Ratio

Risk Ratio is the probability of the outcome in the exposed group divided by the probability of the outcome in the unexposed group.

$$RR = \frac{\frac{TP}{TP+FN}}{\frac{FP}{FP + TN}}$$

Odds Ratio

$$OR = \frac{TP/FN}{FP/TN}$$

Odds of a probability $p$ are $\frac{p}{1 - p}$. Odds ratios are centered around 1: if it’s 1.5, the odds are 50% higher; if it’s 0.7, they’re 30% lower (“protective” in some cases).
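A sketch computing both ratios from the four cells, per the formulas above (the counts are made up):

```python
# Risk Ratio and Odds Ratio from the four cells, following the formulas above.
def risk_ratio(tp, fp, fn, tn):
    return (tp / (tp + fn)) / (fp / (fp + tn))

def odds_ratio(tp, fp, fn, tn):
    return (tp / fn) / (fp / tn)

# Hypothetical 2x2 counts.
tp, fp, fn, tn = 30, 10, 20, 40
print(risk_ratio(tp, fp, fn, tn))  # 3.0
print(odds_ratio(tp, fp, fn, tn))  # 6.0
```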

Footnotes

  1. As simple as picking the highest value of $(\text{Sensitivity} + \text{Specificity} - 1)$ 🥳